We could move objects by modifying their vertices and re-configuring buffers every frame, but this is costly and cumbersome
Instead we use matrices
Vectors & Matrices
If we want to visualize vectors as positions, we use position vectors, which originate from the origin
A unit vector has a length of 1 and can point in any direction. Calculating the unit vector of a given vector (dividing it by its magnitude) is called normalizing the vector
The dot product is the sum of the component-wise products of two vectors, and also equals the product of their magnitudes with the cosine of the angle between them
The cross product is defined in 3D space only and produces a vector orthogonal to the two given vectors. Refer to the link for the calculation
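As a quick illustration, here is a minimal sketch of these vector operations using GLM (the math library these notes use later); the vectors chosen are arbitrary example values.

```cpp
// Sketch: normalize, dot product and cross product with GLM (example values).
#include <glm/glm.hpp>
#include <cstdio>

int main() {
    glm::vec3 a(3.0f, 4.0f, 0.0f);
    glm::vec3 b(0.0f, 1.0f, 0.0f);

    // Normalizing: divide the vector by its magnitude to get a unit vector.
    glm::vec3 aUnit = glm::normalize(a);   // (0.6, 0.8, 0.0)

    // Dot product: sum of component-wise products, also |a||b|cos(theta).
    float d = glm::dot(a, b);              // 4.0

    // Cross product: a vector orthogonal to both a and b (3D only).
    glm::vec3 c = glm::cross(a, b);        // (0, 0, 3)

    std::printf("unit=(%.1f,%.1f,%.1f) dot=%.1f cross=(%.1f,%.1f,%.1f)\n",
                aUnit.x, aUnit.y, aUnit.z, d, c.x, c.y, c.z);
}
```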
Matrix multiplication is essentially the dot product applied row-by-column: $A_{a \times b} \cdot B_{b \times c} = C_{a \times c}$ (the inner dimensions must match)
A vector is basically an N×1 matrix and can hence be multiplied with matrices to apply transformations
The identity matrix I performs the identity transform (leaves the vector unchanged)
A scale matrix is basically I, but each 1 on the diagonal can be replaced by a different value depending on how much you want each axis to scale
A translation matrix alters the position of a vector. For a 3D vector in homogeneous coordinates, the translation values go in the top 3 entries of the 4th column, which results in each component having the corresponding value added to it.
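As a worked example (using placeholder scale factors $S_1, S_2, S_3$ and translation offsets $T_x, T_y, T_z$), the scale and translation matrices act on a homogeneous vector like this:

$$
\begin{bmatrix} S_1 & 0 & 0 & 0 \\ 0 & S_2 & 0 & 0 \\ 0 & 0 & S_3 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} S_1 \cdot x \\ S_2 \cdot y \\ S_3 \cdot z \\ 1 \end{bmatrix}
,\qquad
\begin{bmatrix} 1 & 0 & 0 & T_x \\ 0 & 1 & 0 & T_y \\ 0 & 0 & 1 & T_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x + T_x \\ y + T_y \\ z + T_z \\ 1 \end{bmatrix}
$$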
Rotation matrix
Rotations in 3D are specified with an angle and a rotation axis
Rotations are done with a combination of cosine and sine
An example of a rotation about the X axis:
$$
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta & 0 \\ 0 & \sin\theta & \cos\theta & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
\cdot
\begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}
=
\begin{bmatrix} x \\ \cos\theta \cdot y - \sin\theta \cdot z \\ \sin\theta \cdot y + \cos\theta \cdot z \\ 1 \end{bmatrix}
$$
To rotate to an arbitrary 3D orientation, you can rotate around X, then Y and lastly Z, but combining Euler rotations like this can produce gimbal lock. A better way is to rotate about an arbitrary unit axis $(R_x, R_y, R_z)$ directly:
$$
\begin{bmatrix}
\cos\theta + R_x^2(1-\cos\theta) & R_x R_y(1-\cos\theta) - R_z\sin\theta & R_x R_z(1-\cos\theta) + R_y\sin\theta & 0 \\
R_y R_x(1-\cos\theta) + R_z\sin\theta & \cos\theta + R_y^2(1-\cos\theta) & R_y R_z(1-\cos\theta) - R_x\sin\theta & 0 \\
R_z R_x(1-\cos\theta) - R_y\sin\theta & R_z R_y(1-\cos\theta) + R_x\sin\theta & \cos\theta + R_z^2(1-\cos\theta) & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
$$
Even this does not prevent gimbal lock entirely; the real solution is to use quaternions.
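A minimal sketch of both options in GLM, assuming an arbitrary 45-degree rotation about a normalized (1, 1, 0) axis: glm::rotate builds the arbitrary-axis matrix above, while glm::angleAxis produces a quaternion that can be converted back to a mat4.

```cpp
// Sketch: rotating about an arbitrary axis via a matrix and via a quaternion.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::rotate, glm::radians
#include <glm/gtc/quaternion.hpp>         // glm::quat, glm::angleAxis, glm::mat4_cast

int main() {
    glm::vec3 axis  = glm::normalize(glm::vec3(1.0f, 1.0f, 0.0f));  // example axis
    float     angle = glm::radians(45.0f);                          // example angle

    // Matrix form: builds the arbitrary-axis rotation matrix shown above.
    glm::mat4 rotMat = glm::rotate(glm::mat4(1.0f), angle, axis);

    // Quaternion form: avoids composing Euler rotations, so no gimbal lock.
    glm::quat q           = glm::angleAxis(angle, axis);
    glm::mat4 rotFromQuat = glm::mat4_cast(q);

    // Both transform a point the same way.
    glm::vec4 p1 = rotMat      * glm::vec4(1.0f, 0.0f, 0.0f, 1.0f);
    glm::vec4 p2 = rotFromQuat * glm::vec4(1.0f, 0.0f, 0.0f, 1.0f);
    (void)p1; (void)p2;
}
```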
We can combine several transforms with matrix multiplication. The rightmost matrix in the multiplication is the first one applied to the vector. The recommended order is scaling -> rotation -> translation (see the sketch below)
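A sketch of such a combination with GLM; the translation, angle and scale values are arbitrary examples. Reading the final multiplication right to left gives scale, then rotation, then translation.

```cpp
// Sketch: combining transforms; the rightmost factor is applied first,
// so translate * rotate * scale means scale, then rotate, then translate.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 makeModelMatrix() {
    glm::mat4 trans = glm::translate(glm::mat4(1.0f), glm::vec3(1.0f, 0.0f, 0.0f));
    glm::mat4 rot   = glm::rotate(glm::mat4(1.0f), glm::radians(90.0f),
                                  glm::vec3(0.0f, 0.0f, 1.0f));
    glm::mat4 scale = glm::scale(glm::mat4(1.0f), glm::vec3(0.5f));

    // Read right to left: scale first, then rotate, then translate.
    return trans * rot * scale;
}
```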
All of these can be implemented using glm, and the matrices can be sent to the shaders through uniforms of type mat4
If the uniforms have to change frequently (e.g. every frame), set them inside the render loop
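A sketch of uploading a mat4 uniform each frame; the program handle, the uniform name "transform" and the GLAD loader header are assumptions for illustration, not fixed by these notes.

```cpp
// Sketch: send a mat4 to a shader uniform every frame (names are assumed).
#include <glad/glad.h>                     // assumed OpenGL loader
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>            // glm::value_ptr

void drawFrame(unsigned int shaderProgram, float timeSeconds) {
    glUseProgram(shaderProgram);

    // Rebuild the matrix every frame since it changes over time.
    glm::mat4 trans = glm::rotate(glm::mat4(1.0f), timeSeconds,
                                  glm::vec3(0.0f, 0.0f, 1.0f));

    // Look up "uniform mat4 transform;" (assumed name) in the vertex shader
    // and upload the matrix.
    int loc = glGetUniformLocation(shaderProgram, "transform");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(trans));

    // ... issue draw calls here (glDrawArrays / glDrawElements) ...
}
```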
Coordinate Systems
Intro
As we know, OpenGL expects all visible vertices to be in normalized device coordinates (NDC), i.e. between -1.0 and 1.0. What we do in practice is send coords in a range we specify and convert them to NDC in the vertex shader.
The process of producing NDC involves several intermediate coordinate system representations, because certain operations and calculations are easier in specific systems
The coord systems of importance are
Local space (or Object space)
World space
View space (or Eye space)
Clip space
Screen space
Transformation matrices
To transform coords into each successive coord system, we use the model, view and projection matrices.
Vertex coords are thereby transformed into world, view and clip coords, and eventually into screen coords (a GLM sketch follows the list below).
Local coords - coords of the object relative to its local origin
World space coords - local coords transformed into a larger world, relative to the world origin
View space coords - coords transformed to view space so that each object is seen from the camera's point of view
Clip coords - coords processed to the -1.0 to 1.0 range; determines which vertices end up on screen
Screen coords - a viewport transform converts the -1.0 to 1.0 range to the range defined by glViewport
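A minimal sketch of building the three matrices with GLM; the rotation, camera offset, 45-degree FOV and 800x600 aspect ratio are arbitrary example values. In the vertex shader the final position would be projection * view * model * local.

```cpp
// Sketch: model, view and projection matrices with GLM (example values).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

struct Matrices { glm::mat4 model, view, projection; };

Matrices buildMatrices() {
    Matrices m;

    // Model: local space -> world space (place/orient the object in the world).
    m.model = glm::rotate(glm::mat4(1.0f), glm::radians(-55.0f),
                          glm::vec3(1.0f, 0.0f, 0.0f));

    // View: world space -> view space (move the world opposite to the camera).
    m.view = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, -3.0f));

    // Projection: view space -> clip space (perspective, 45 deg FOV, example aspect).
    m.projection = glm::perspective(glm::radians(45.0f), 800.0f / 600.0f,
                                    0.1f, 100.0f);
    return m;
}

// In the vertex shader: gl_Position = projection * view * model * vec4(aPos, 1.0);
```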
Local space
Coord space local to object
Example: the center of a cube, like the default cube in a new Blender file
World space
All objects are positioned w.r.t world origin of (0,0,0)